Results 1 - 13 of 13
1.
Malar J ; 22(1): 33, 2023 Jan 27.
Article in English | MEDLINE | ID: mdl-36707822

ABSTRACT

BACKGROUND: Microscopic examination is commonly used for malaria diagnosis in the field. However, the lack of well-trained microscopists in the malaria-endemic areas most affected by the disease is a severe problem. Moreover, the examination process is time-consuming and prone to human error. Automated diagnostic systems based on machine learning offer great potential to overcome these problems. This study aims to evaluate Malaria Screener, a smartphone-based application for malaria diagnosis. METHODS: A total of 190 patients were recruited at two sites in rural areas near Khartoum, Sudan. The Malaria Screener mobile application was deployed to screen Giemsa-stained blood smears. Both expert microscopy and nested PCR were performed for use as reference standards. First, Malaria Screener was evaluated using the two reference standards. Then, during post-study experiments, the evaluation was repeated for a newly developed algorithm, PlasmodiumVF-Net. RESULTS: Malaria Screener reached 74.1% (95% CI 63.5-83.0) accuracy in detecting Plasmodium falciparum malaria using expert microscopy as the reference, after a threshold calibration. It reached 71.8% (95% CI 61.0-81.0) accuracy when compared with PCR. The achieved accuracies meet the WHO Level 3 requirement for parasite detection. The processing time for each smear varies from 5 to 15 min, depending on the concentration of white blood cells (WBCs). In the post-study experiment, Malaria Screener reached 91.8% (95% CI 83.8-96.6) accuracy when patient-level results were calculated with a different method. This accuracy meets the WHO Level 1 requirement for parasite detection.
In addition, PlasmodiumVF-Net, a newly developed algorithm, reached 83.1% (95% CI 77.0-88.1) accuracy when compared with expert microscopy and 81.0% (95% CI 74.6-86.3) accuracy when compared with PCR, meeting the WHO Level 2 requirement for detecting both Plasmodium falciparum and Plasmodium vivax malaria, without using data from the testing sites for training or calibration. The results reported for both Malaria Screener and PlasmodiumVF-Net used thick smears for diagnosis. Neither system was assessed here on species identification or parasite counting, which are still under development. CONCLUSION: Malaria Screener showed the potential to be deployed in resource-limited areas to facilitate routine malaria screening. It is the first smartphone-based system for malaria diagnosis evaluated at the patient level in a natural field environment. Thus, the field results reported here can serve as a reference for future studies.


Subject(s)
Malaria, Falciparum; Malaria, Vivax; Malaria; Mobile Applications; Humans; Smartphone; Malaria/parasitology; Malaria, Falciparum/diagnosis; Malaria, Falciparum/parasitology; Malaria, Vivax/diagnosis; Plasmodium falciparum; Sensitivity and Specificity; Plasmodium vivax
2.
Quant Imaging Med Surg ; 12(1): 675-687, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34993110

ABSTRACT

BACKGROUND: Tuberculosis (TB) drug resistance is a worldwide public health problem that threatens progress made in TB care and control. Early detection of drug resistance is important for disease control, and discrimination between drug-resistant TB (DR-TB) and drug-sensitive TB (DS-TB) remains an open problem. The objective of this work is to investigate the relevance of readily available clinical data and data derived from chest X-rays (CXRs) in DR-TB prediction, and to investigate the possibility of applying machine learning techniques to selected clinical and radiological features for discrimination between DR-TB and DS-TB. We hypothesize that the number of sextants affected by abnormalities such as nodule, cavity, collapse, and infiltrate may serve as a radiological feature for DR-TB identification, and that both clinical and radiological features are important factors for machine classification of DR-TB and DS-TB. METHODS: We use data from the NIAID TB Portals program (https://tbportals.niaid.nih.gov), comprising 1,455 DR-TB cases and 782 DS-TB cases from 11 countries. We first select three clinical features and 26 radiological features from the dataset. Then, we perform Pearson's chi-squared test to analyze the significance of the selected clinical and radiological features. Finally, we train machine classifiers based on different features and evaluate their ability to differentiate between DR-TB and DS-TB. RESULTS: Pearson's chi-squared test shows that two clinical features and 23 radiological features are statistically significant with respect to DR-TB vs. DS-TB. Ten-fold cross-validation using a support vector machine shows that automatic discrimination between DR-TB and DS-TB achieves an average accuracy of 72.34% and an average AUC of 78.42% when combining all 25 statistically significant features.
CONCLUSIONS: Our study suggests that the number of affected lung sextants can be used for predicting DR-TB, and that automatic discrimination between DR-TB and DS-TB is possible, with a combination of clinical features and radiological features providing the best performance.
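The workflow this abstract describes — per-feature significance testing with Pearson's chi-squared test, then 10-fold cross-validated SVM classification on the significant features — can be sketched as follows. This is an illustrative reconstruction on synthetic data, not the study's code: the feature set (sextant counts), the label model, and the SVM settings are all assumptions.

```python
# Sketch: chi-squared feature screening followed by a 10-fold cross-validated
# SVM, on synthetic data standing in for the clinical/radiological features.
import numpy as np
from scipy.stats import chi2_contingency
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
# Hypothetical features: number of sextants (0-6) showing each abnormality.
X = rng.integers(0, 7, size=(n, 4)).astype(float)
# Synthetic DR-TB vs. DS-TB label, loosely driven by the first feature.
y = (X[:, 0] + rng.normal(0, 1.5, n) > 3).astype(int)

# Pearson's chi-squared test per feature: contingency table of value vs. class.
p_values = []
for j in range(X.shape[1]):
    table = np.array([[np.sum((X[:, j] == v) & (y == k)) for k in (0, 1)]
                      for v in np.unique(X[:, j])])
    p_values.append(chi2_contingency(table)[1])
p_values = np.array(p_values)

cols = np.flatnonzero(p_values < 0.05)   # keep statistically significant features
if cols.size == 0:
    cols = np.arange(X.shape[1])         # fall back to all features

scores = cross_val_score(SVC(kernel="rbf"), X[:, cols], y, cv=10)  # 10-fold CV
print(f"10-fold mean accuracy: {scores.mean():.3f}")
```

The same screen-then-classify pattern applies directly when the features are real sextant counts and clinical variables.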

3.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 2964-2967, 2021 11.
Article in English | MEDLINE | ID: mdl-34891867

ABSTRACT

Tuberculosis (TB) is a serious infectious disease that mainly affects the lungs. Drug resistance to the disease makes it more challenging to control. Early diagnosis of drug resistance can help with decision making resulting in appropriate and successful treatment. Chest X-rays (CXRs) have been pivotal to identifying tuberculosis and are widely available. In this work, we utilize CXRs to distinguish between drug-resistant and drug-sensitive tuberculosis. We incorporate Convolutional Neural Network (CNN) based models to discriminate the two types of TB, and employ standard and deep learning based data augmentation methods to improve the classification. Using labeled data from NIAID TB Portals and additional non-labeled sources, we were able to achieve an Area Under the ROC Curve (AUC) of up to 85% using a pretrained InceptionV3 network.


Subject(s)
Tuberculosis, Multidrug-Resistant; Tuberculosis; Area Under Curve; Humans; Neural Networks, Computer; Radiography; Tuberculosis, Multidrug-Resistant/diagnostic imaging; Tuberculosis, Multidrug-Resistant/drug therapy
4.
Diagnostics (Basel) ; 11(11)2021 Oct 27.
Article in English | MEDLINE | ID: mdl-34829341

ABSTRACT

We propose a new framework, PlasmodiumVF-Net, to analyze thick smear microscopy images for malaria diagnosis at both the image and patient level. Our framework detects whether a patient is infected and, in case of a malarial infection, reports whether the patient is infected by Plasmodium falciparum or Plasmodium vivax. PlasmodiumVF-Net first detects candidates for Plasmodium parasites using a Mask Regional-Convolutional Neural Network (Mask R-CNN), filters out false positives using a ResNet50 classifier, and then follows a new approach to recognize parasite species based on a score obtained from the number of detected patches and their aggregated probabilities over all of a patient's images. Reporting a patient-level decision is highly challenging, and therefore reported less often in the literature, due to the small size of detected parasites, their similarity to staining artifacts, the similarity of species in different development stages, and patient-level illumination or color variations. We use a manually annotated dataset consisting of 350 patients, with about 6,000 images, which we make publicly available together with this manuscript. Our framework achieves an overall accuracy above 90% at both the image and patient level.
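The patient-level scoring idea above — a score built from the number of detected patches and their aggregated probabilities across a patient's images — can be illustrated with a minimal sketch. The weighting and decision threshold below are invented for illustration; the abstract does not specify PlasmodiumVF-Net's actual formula.

```python
# Hypothetical sketch of patch-count plus probability-mass aggregation into a
# patient-level decision. The score formula and threshold are illustrative
# stand-ins, not PlasmodiumVF-Net's published scoring rule.
from dataclasses import dataclass
from typing import List

@dataclass
class ImageDetections:
    probs: List[float]  # classifier probabilities of candidates that survived filtering

def patient_score(images: List[ImageDetections]) -> float:
    """Combine the detection count and the summed probability over all images."""
    n_patches = sum(len(im.probs) for im in images)
    prob_mass = sum(sum(im.probs) for im in images)
    return n_patches + prob_mass  # higher -> stronger evidence of infection

def patient_decision(images: List[ImageDetections], threshold: float = 5.0) -> str:
    return "infected" if patient_score(images) >= threshold else "uninfected"

# Three images from one hypothetical patient; the third has no detections.
imgs = [ImageDetections([0.9, 0.8]), ImageDetections([0.7]), ImageDetections([])]
print(patient_decision(imgs))  # -> infected (score = 3 patches + 2.4 prob mass)
```

The same aggregation shape extends to species recognition by keeping one score per candidate species and taking the larger.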

5.
Proc IAPR Int Conf Pattern Recogn ; 2020: 4317-4323, 2021 Jan.
Article in English | MEDLINE | ID: mdl-34651146

ABSTRACT

Characterizing the spatial relationship between blood vessel and lymphatic vascular structures in mouse dura mater tissue is useful for modeling fluid flows and changes in dynamics in various disease processes. We propose a new deep learning-based approach to fuse a set of multi-channel single-focus microscopy images within each volumetric z-stack into a single fused image that accurately captures as much of the vascular structure as possible. The red spectral channel captures small blood vessels, and the green fluorescence channel images lymphatic structures in the intact dura mater attached to bone. The deep architecture, Multi-Channel Fusion U-Net (MCFU-Net), combines multi-slice regression likelihood maps of thin linear structures using max pooling for each channel independently to estimate a slice-based focus selection map. We compare MCFU-Net with a widely used derivative-based multi-scale Hessian fusion method [8]. The multi-scale Hessian-based fusion produces dark halos, non-homogeneous backgrounds, and less detailed anatomical structures. The perception-based no-reference image quality metrics PIQUE, NIQE, and BRISQUE confirm the effectiveness of the proposed method.

6.
Front Neural Circuits ; 15: 690475, 2021.
Article in English | MEDLINE | ID: mdl-34248505

ABSTRACT

Precise positioning of neurons resulting from cell division and migration during development is critical for normal brain function. Disruption of neuronal migration can cause a myriad of neurological disorders. To investigate the functional consequences of defective neuronal positioning on circuit function, we studied a zebrafish frizzled3a (fzd3a) loss-of-function mutant off-limits (olt) where the facial branchiomotor (FBM) neurons fail to migrate out of their birthplace. A jaw movement assay, which measures the opening of the zebrafish jaw (gape), showed that the frequency of gape events, but not their amplitude, was decreased in olt mutants. Consistent with this, a larval feeding assay revealed decreased food intake in olt mutants, indicating that the FBM circuit in mutants generates defective functional outputs. We tested various mechanisms that could generate defective functional outputs in mutants. While fzd3a is ubiquitously expressed in neural and non-neural tissues, jaw cartilage and muscle developed normally in olt mutants, and muscle function also appeared to be unaffected. Although FBM neurons were mispositioned in olt mutants, axon pathfinding to jaw muscles was unaffected. Moreover, neuromuscular junctions established by FBM neurons on jaw muscles were similar between wildtype siblings and olt mutants. Interestingly, motor axons innervating the interhyoideus jaw muscle were frequently defasciculated in olt mutants. Furthermore, GCaMP imaging revealed that mutant FBM neurons were less active than their wildtype counterparts. These data show that aberrant positioning of FBM neurons in olt mutants is correlated with subtle defects in fasciculation and neuronal activity, potentially generating defective functional outputs.


Subject(s)
Motor Neurons; Zebrafish; Animals; Axons; Cell Movement; Neurogenesis; Zebrafish Proteins/genetics
7.
IEEE Appl Imag Pattern Recognit Workshop ; 2021: 9762109, 2021 Apr 26.
Article in English | MEDLINE | ID: mdl-36483328

ABSTRACT

Malaria is a major health threat caused by Plasmodium parasites that infect the red blood cells. Two predominant types of Plasmodium parasites are Plasmodium vivax (P. vivax) and Plasmodium falciparum (P. falciparum). Diagnosis of malaria typically involves visual microscopy examination of blood smears for malaria parasites. This is a tedious, error-prone visual inspection task requiring microscopy expertise which is often lacking in resource-poor settings. To address these problems, attempts have been made in recent years to automate malaria diagnosis using machine learning approaches. Several challenges need to be met for a machine learning approach to be successful in malaria diagnosis. Microscopy images acquired at different sites often vary in color, contrast, and consistency caused by different smear preparation and staining methods. Moreover, touching and overlapping cells complicate the red blood cell detection process, which can lead to inaccurate blood cell counts and thus incorrect parasitemia calculations. In this work, we propose a red blood cell detection and extraction framework to enable processing and analysis of single cells for follow-up processes like counting infected cells or identifying parasite species in thin blood smears. This framework consists of two modules: a cell detection module and a cell extraction module. The cell detection module trains a modified Channel-wise Feature Pyramid Network for Medicine (CFPNet-M) deep learning network that takes the green channel of the image and the color-deconvolution processed image as inputs, and learns a truncated distance transform image of cell annotations. CFPNet-M is chosen due to its low resource requirements, while the distance transform allows achieving more accurate cell counts for dense cells. Once the cells are detected by the network, the cell extraction module is used to extract single cells from the original image and count the number of cells. 
Our preliminary results based on 193 patients (including 148 P. falciparum-infected patients and 45 uninfected patients) show that our framework achieves a cell count accuracy of 92.2%.
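The regression target described above — a truncated distance transform of cell annotations, which lets dense cells be counted more accurately than binary masks — can be sketched as follows. The truncation radius and normalization are illustrative assumptions, not the paper's parameters.

```python
# Sketch: build a truncated, inverted distance-transform map from point
# annotations of cell centers -- a dense target a network like CFPNet-M
# could regress. Radius and normalization here are illustrative choices.
import numpy as np
from scipy.ndimage import distance_transform_edt

def truncated_distance_map(shape, centers, radius=10):
    """Distance-to-nearest-annotation, clipped at `radius` and inverted so
    cell centers take the highest value (1.0) and far pixels take 0.0."""
    mask = np.ones(shape, dtype=bool)
    for r, c in centers:
        mask[r, c] = False                  # zero distance at each annotation
    dist = distance_transform_edt(mask)     # Euclidean distance to nearest center
    dist = np.clip(dist, 0, radius)         # truncate far-away distances
    return (radius - dist) / radius         # 1.0 at centers, 0.0 beyond radius

target = truncated_distance_map((64, 64), [(10, 10), (40, 50)])
print(target[10, 10], target[0, 63])  # 1.0 at a center, 0.0 far from any center
```

Counting then amounts to detecting local maxima of the network's predicted map, which remain separable even when cells touch.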

8.
IEEE J Biomed Health Inform ; 25(5): 1735-1746, 2021 05.
Article in English | MEDLINE | ID: mdl-33119516

ABSTRACT

Computer-assisted algorithms have become a mainstay of biomedical applications to improve accuracy and reproducibility of repetitive tasks like manual segmentation and annotation. We propose a novel pipeline for red blood cell detection and counting in thin blood smear microscopy images, named RBCNet, using a dual deep learning architecture. RBCNet consists of a U-Net first stage for cell-cluster or superpixel segmentation, followed by a second refinement stage, Faster R-CNN, for detecting small cell objects within the connected component clusters. RBCNet uses cell clustering instead of region proposals, which is robust to cell fragmentation, is highly scalable for detecting small objects or fine-scale morphological structures in very large images, can be trained using non-overlapping tiles, and during inference is adaptive to the scale of cell clusters, with a low memory footprint. We tested our method on an archived collection of human malaria smears with nearly 200,000 labeled cells across 965 images from 193 patients, acquired in Bangladesh, with each patient contributing five images. Cell detection accuracy using RBCNet was higher than 97%. The novel dual cascade RBCNet architecture provides more accurate cell detections because the foreground cell-cluster masks from U-Net adaptively guide the detection stage, resulting in notably higher true-positive and lower false-alarm rates compared to traditional and other deep learning methods. The RBCNet pipeline implements a crucial step towards automated malaria diagnosis.


Subject(s)
Deep Learning; Malaria; Cluster Analysis; Erythrocytes; Humans; Image Processing, Computer-Assisted; Malaria/diagnosis; Reproducibility of Results
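The two-stage idea in this record — a foreground cell-cluster mask guiding a per-cluster detection stage — reduces to connected-component analysis plus cropping. In the sketch below, the U-Net stage is replaced by a given binary mask and the Faster R-CNN stage by a stub detector; only the control flow between the stages is illustrated.

```python
# Sketch of mask-guided two-stage detection: split the stage-1 foreground
# mask into connected clusters, then run a stage-2 detector on each cluster's
# bounding-box crop. The stub detector stands in for Faster R-CNN.
import numpy as np
from scipy.ndimage import label, find_objects

def detect_cells(image, cluster_mask, detector):
    components, n_clusters = label(cluster_mask)   # clusters from stage-1 mask
    detections = []
    for sl in find_objects(components):            # bounding box per cluster
        crop = image[sl]
        detections.extend(detector(crop, offset=(sl[0].start, sl[1].start)))
    return detections

def stub_detector(crop, offset):
    """Stand-in for the detection stage: reports one 'cell' per crop center."""
    r0, c0 = offset
    return [(r0 + crop.shape[0] // 2, c0 + crop.shape[1] // 2)]

img = np.zeros((32, 32))
mask = np.zeros((32, 32), dtype=bool)
mask[2:8, 2:8] = True       # first cell cluster
mask[20:30, 20:30] = True   # second cell cluster
print(detect_cells(img, mask, stub_detector))  # one detection per cluster
```

Because each crop is scaled to its cluster, the detector sees small objects at a workable resolution regardless of the full image size, which is the scalability property the abstract highlights.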
9.
Annu Int Conf IEEE Eng Med Biol Soc ; 2018: 2736-2739, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30440967

ABSTRACT

Automatic segmentation of the vascular network is a critical step in quantitatively characterizing vessel remodeling in retinal images and other tissues. We propose a 14-layer deep learning architecture to extract blood vessels in fundoscopy images, evaluated on the popular standard datasets DRIVE and STARE. Experimental results show that our CNN is better at identifying foreground vessel regions: its sensitivity is higher by 10% than that of other methods when trained on the same dataset, and by more than 1% with cross-training (trained on DRIVE and tested on STARE, and vice versa). Furthermore, our accuracy (> 0.95) compares favorably with state-of-the-art algorithms.


Subject(s)
Algorithms; Deep Learning; Neural Networks, Computer; Retinal Vessels/diagnostic imaging; Humans; Retinal Vessels/surgery
10.
Article in English | MEDLINE | ID: mdl-32123642

ABSTRACT

Segmentation and quantification of microvasculature structures are the main steps toward studying microvasculature remodeling. The proposed patch-based semantic architecture enables accurate segmentation of challenging epifluorescence microscopy images. Our fast, pixel-based semantic network is trained on random patches from different epifluorescence images to learn how to discriminate between vessel and non-vessel pixels. The proposed semantic vessel network (SVNet) relies on understanding the morphological structure of thin vessels in the patches, rather than considering the whole image as input, to speed up the training process and to maintain the clarity of thin structures. Experimental results on our ovariectomized (OVX) mouse dura mater epifluorescence microscopy images show promising results in both the arteriole and venule parts. We compared our results with different segmentation methods such as local and global thresholding, matched-filter-based approaches, and related state-of-the-art deep learning networks. Our overall accuracy (> 98%) outperforms all of these methods, including our previous work (VNet) [1].

11.
Article in English | MEDLINE | ID: mdl-29152413

ABSTRACT

In this paper, we consider confocal microscopy-based vessel segmentation with optimized features and random forest classification. By utilizing multi-scale, vessel-specific features tuned to capture curvilinear structures, such as the Frobenius norm of the Hessian eigenvalues, the Laplacian of Gaussian (LoG), oriented second derivatives, a line detector, and intensity masked with a LoG scale map, we obtain better segmentation results in challenging imaging conditions. We obtain binary segmentations using a random forest classifier trained on ground truth marked by physiologists. Experimental results on mouse dura mater confocal microscopy vessel segmentations indicate that we obtain better results compared to global segmentation approaches.
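The feature-plus-random-forest approach described here can be sketched on a small synthetic image. The scales, the tiny test image, and the use of only two of the listed features (Hessian Frobenius norm and LoG) are illustrative assumptions; the paper's full feature bank and training data differ.

```python
# Sketch: per-pixel multiscale curvilinear features (Frobenius norm of the
# Hessian, Laplacian of Gaussian) fed to a random forest pixel classifier.
# Scales and the synthetic 'vessel' image are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace
from sklearn.ensemble import RandomForestClassifier

def pixel_features(img, scales=(1.0, 2.0, 4.0)):
    feats = []
    for s in scales:
        sm = gaussian_filter(img, s)
        gy, gx = np.gradient(sm)          # first derivatives of smoothed image
        hyy, hyx = np.gradient(gy)        # second derivatives -> Hessian entries
        hxy, hxx = np.gradient(gx)
        feats.append(np.sqrt(hxx**2 + hyy**2 + 2 * hxy**2))  # Frobenius norm
        feats.append(gaussian_laplace(img, s))                # LoG response
    return np.stack(feats, axis=-1).reshape(-1, len(feats))

# Tiny synthetic image: a bright horizontal 'vessel' on low-amplitude noise.
rng = np.random.default_rng(1)
img = rng.normal(0, 0.05, (32, 32))
img[16, :] += 1.0
labels = np.zeros((32, 32), dtype=int)
labels[16, :] = 1                         # ground-truth vessel pixels

X = pixel_features(img)
rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels.ravel())
pred = rf.predict(X).reshape(32, 32)
print(pred[16].mean())  # fraction of vessel pixels recovered (on training data)
```

In practice the classifier is trained on annotated images and applied to held-out ones; the sketch evaluates on its own training image only to keep the example self-contained.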

12.
Annu Int Conf IEEE Eng Med Biol Soc ; 2016: 2901-2904, 2016 Aug.
Article in English | MEDLINE | ID: mdl-28261007

ABSTRACT

Automatic segmentation of microvascular structures is a critical step in quantitatively characterizing vessel remodeling and other physiological changes in the dura mater or other tissues. We developed a supervised random forest (RF) classifier for segmenting thin vessel structures using multiscale features based on the Hessian, oriented second derivatives, the Laplacian of Gaussian, and line features. The multiscale line detector feature, in particular, helps detect and connect faint vessel structures that would otherwise be missed. Experimental results on epifluorescence imagery show that the RF approach produces foreground vessel regions that are almost 20 and 25 percent better than Niblack and Otsu threshold-based segmentations, respectively.


Subject(s)
Algorithms; Dura Mater/blood supply; Image Processing, Computer-Assisted/methods; Microvessels/anatomy & histology; Optical Imaging/methods; Animals; Dura Mater/anatomy & histology; Mice; Microvessels/physiology; Optical Imaging/mortality; Vascular Remodeling
13.
Annu Int Conf IEEE Eng Med Biol Soc ; 2016: 5913-5916, 2016 Aug.
Article in English | MEDLINE | ID: mdl-28261011

ABSTRACT

Commonly used drawing tools for interactive image segmentation and labeling include active contours or boundaries, scribbles, rectangles, and other shapes. Thin vessel shapes in images of vascular networks are difficult to segment using automatic or interactive methods. This paper introduces the novel use of a sparse set of user-defined seed points (supervised labels) for precisely, quickly, and robustly segmenting complex biomedical images. A multiquadric spline-based binary classifier is proposed as a unique approach to interactive segmentation, using color values and the locations of seed points as features. Epifluorescence imagery of the dura mater microvasculature is difficult to segment for quantitative applications due to challenging tissue preparation, imaging conditions, and thin, faint structures. Experimental results based on twenty epifluorescence images are used to illustrate the benefits of using a set of seed points to obtain fast and accurate interactive segmentation, compared to four interactive and automatic segmentation approaches.


Subject(s)
Algorithms; Dura Mater/blood supply; Image Processing, Computer-Assisted/methods; Microvessels/anatomy & histology; Animals; Mice; Microvessels/diagnostic imaging; Optical Imaging/methods
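The seed-point idea in this record can be sketched with SciPy's multiquadric radial basis function interpolator: fit an interpolant to ±1 labels at the user-clicked seeds, using pixel location and intensity as features, and threshold its sign over the whole image. The single-channel intensity feature and the synthetic image are illustrative assumptions; the paper uses color values.

```python
# Sketch: multiquadric spline binary classifier from sparse seed points.
# Features are (row, col, intensity); labels are +1 (vessel) / -1 (background).
import numpy as np
from scipy.interpolate import Rbf

img = np.zeros((32, 32))
img[:, 14:18] = 1.0   # synthetic bright vertical 'vessel'

# Hypothetical user-clicked seeds: (row, col, label).
seeds = [(5, 15, 1), (25, 16, 1), (5, 2, -1), (20, 28, -1)]
r, c, lab = map(np.array, zip(*seeds))

# Multiquadric spline interpolant through the labeled seed points.
f = Rbf(r, c, img[r, c], lab, function="multiquadric")

rows, cols = np.mgrid[0:32, 0:32]
scores = f(rows.ravel(), cols.ravel(), img.ravel()).reshape(32, 32)
segmentation = scores > 0   # positive side of the spline = vessel

# The interpolant passes through the seeds exactly, so seed pixels are
# guaranteed to be classified as labeled.
print(bool(segmentation[5, 15]), bool(segmentation[5, 2]))  # True False
```

Because the spline interpolates the seeds exactly, adding a corrective seed point immediately flips the local decision, which is what makes this style of interaction fast to refine.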